
Critical Technology


A.I. Is About to Get a Whole Lot Worse Under Trump

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. On Thursday evening, President-elect Donald Trump announced on his Truth Social platform that he would be appointing David O. Sacks--the "PayPal Mafia" alum, longtime venture capitalist, All-In Podcast co-host, Elon Musk pal, and rock-ribbed Silicon Valley conservative--as the "White House A.I. & Crypto Czar." In his statement, Trump wrote that "Sacks will focus on making America the clear global leader" in artificial intelligence and cryptocurrency, which he deemed to be "two areas critical to the future of American competitiveness." In addition, Sacks will "safeguard Free Speech online," "steer us away from Big Tech bias and censorship," and "lead the Presidential Council of Advisors for Science and Technology." For his first-ever Truth Social post, the incoming czar responded to Trump with gratitude and claimed that he "looks forward to advancing American competitiveness in these critical technologies."


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest

Haryanto, Christoforus Yoga, Vu, Minh Hieu, Nguyen, Trung Duc, Lomempow, Emily, Nurliana, Yulia, Taheri, Sona

arXiv.org Artificial Intelligence

The rapid advancement of Generative AI (GenAI) technologies offers transformative opportunities within Australia's critical technologies of national interest while introducing unique security challenges. This paper presents SecGenAI, a comprehensive security framework for cloud-based GenAI applications, with a focus on Retrieval-Augmented Generation (RAG) systems. SecGenAI addresses functional, infrastructure, and governance requirements, integrating end-to-end security analysis to generate specifications emphasizing data privacy, secure deployment, and shared responsibility models. Aligned with Australian Privacy Principles, AI Ethics Principles, and guidelines from the Australian Cyber Security Centre and Digital Transformation Agency, SecGenAI mitigates threats such as data leakage, adversarial attacks, and model inversion. The framework's novel approach combines advanced machine learning techniques with robust security measures, ensuring compliance with Australian regulations while enhancing the reliability and trustworthiness of GenAI systems. This research contributes to the field of intelligent systems by providing actionable strategies for secure GenAI implementation in industry, fostering innovation in AI applications, and safeguarding national interests.


China beating West in race for critical technologies, report says

Al Jazeera

China leads the world in 37 out of 44 critical technologies, with Western democracies falling behind in the race for scientific and research breakthroughs, a report by an Australian think tank has found. China is in a position to become the world's top technology superpower, with its dominance already spanning defence, space, robotics, energy, the environment, biotechnology, artificial intelligence (AI), advanced materials and key quantum technology, according to the report by the Australian Strategic Policy Institute (ASPI). The key areas dominated by China include drones, machine learning, electric batteries, nuclear energy, photovoltaics, quantum sensors and critical minerals extraction, according to the Critical Technology Tracker released on Thursday. China's dominance in some fields is so entrenched that all of the world's top 10 leading research institutions for certain technologies are located in the country, according to ASPI. In comparison, the United States leads in just seven critical technologies, including space launch systems and quantum computing, according to ASPI, which receives funding from the Australian, United Kingdom and US governments, as well as private sector sources including the defence and tech industries.


USPTO releases report on artificial intelligence and intellectual property policy

#artificialintelligence

The United States Patent and Trademark Office (USPTO) today released a report titled "Public Views on Artificial Intelligence and Intellectual Property Policy." The new report represents the agency's firm commitment to keeping pace with this rapidly changing and critical technology in order to accelerate American innovation. "On February 11, 2019, President Trump signed Executive Order 13859 announcing the American Artificial Intelligence Initiative, our nation's strategy on artificial intelligence," said U.S. Secretary of Commerce Wilbur Ross. "As artificial intelligence technologies continue to advance, the United States will not cede leadership in global innovation. The Department of Commerce recognizes the importance of harnessing American ingenuity to advance and protect our economic security." "The USPTO has long been committed to ensuring our nation maintains its leadership in all areas of innovation, especially in emerging technologies such as artificial intelligence," said Andrei Iancu, Under Secretary of Commerce for Intellectual Property and Director of the USPTO.


AI Technology Is in the Crosshairs of National Security Restrictions -- Wiley Connect

#artificialintelligence

This article is authored by Duane Pozza, Megan Brown, and Rick Sofield. The U.S. government is increasingly focused on competition between the United States and China in the development of artificial intelligence (AI) as a national security issue. The Administration has oriented its approach to AI to position the U.S. as a leader in AI development and standards, explicitly stating that it is working with its allies in opposition to China. Given the Administration's aggressive stance on trade restrictions with China in a variety of areas, one question facing industry is how AI technology will be regulated by the U.S. government. A recent report by the congressionally formed National Security Commission on Artificial Intelligence (NSCAI) provides some insight into how the Administration – and in particular the U.S. Department of Commerce – might approach AI technology protection.


Global Big Data Conference

#artificialintelligence

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists released a blog post arguing that academic researchers should stop pursuing research that endeavors to predict the likelihood that an individual will commit a criminal act based on variables like crime statistics and facial scans. The blog post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of face recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the authors of the blog post attribute to inequities in the criminal justice system. The justice system produces biased data, and therefore the algorithms trained on this data propagate those biases, the Coalition for Critical Technology argues. The coalition further argues that the very notion of "criminality" is often based on race, and that research on these technologies assumes a neutrality in the algorithms that does not in fact exist.


AI experts say research into algorithms that claim to predict criminality must end

#artificialintelligence

A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual's criminality using algorithms trained on data like facial scans and criminal statistics. Such work is not only scientifically illiterate, says the Coalition for Critical Technology, but perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism. "Let's be clear: there is no way to develop a system that can predict or identify 'criminality' that is not racially biased -- because the category of 'criminality' itself is racially biased," writes the group. "Research of this nature -- and its accompanying claims to accuracy -- rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral."


AI the most critical technology for CIOs over the next five years

#artificialintelligence

Almost two-thirds of CIOs see artificial intelligence (AI) and machine learning technologies as very important or critical to their businesses over the next five years, according to a report from Forbes, in association with VMware. The report, which surveyed over 650 CIOs from around the world, explored how CIOs and IT leaders see their role within the business evolving over the next five years, from the types of technology they will be bringing in to how they plan to drive social responsibility. AI and machine learning were seen as the top two critical future technologies at 62% and 60% respectively, ahead of the Internet of Things (IoT), edge computing and blockchain, which came in at 54%. AI applications may be in their infancy, but almost half of organisations around the world are using at least one AI-powered function in their business. It is seen as a technology that will have a huge impact on companies' bottom lines as IT leaders grow more confident in their use of it. "You need to have a plan of attack for AI and machine learning," said David Gledhill, CIO and group head of technology at DBS Bank, in the report.


In AI, U.S. Economic Statecraft Must Fire on all Cylinders

#artificialintelligence

On August 13, 2018, President Trump signed the Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA). The law's intent is to stop China's acquisition of critical technologies -- the 'crown jewels' of U.S. global power. FIRRMA expands the powers of the Committee on Foreign Investment in the United States (CFIUS) to review the national security implications of foreign investments. The CFIUS reform bill subjects more transactions to review and mandates that foreign investments in critical technologies be reported to the committee. Championed initially as a strong response to China's technology transfer strategy, FIRRMA's passage, coupled with ongoing trade and IP tensions, caused Chinese investment in U.S. technology firms to decline by 79% in 2018.